Quadruped locomotion is rapidly maturing to a degree where robots now routinely traverse a variety of unstructured terrains. However, while gaits can be varied by selecting from a range of precomputed styles, current planners are unable to vary key gait parameters continuously while the robot is in motion. The synthesis of gaits with unexpected operational characteristics, or the blending of dynamic maneuvers, lies beyond the capabilities of the current state of the art. In this work we address this limitation by learning a latent space capturing the key stance phases that constitute a particular gait. This is achieved via a generative model trained on a single trot style, which encourages disentanglement such that applying a drive signal to a single dimension of the latent state induces a holistic plan synthesizing a continuous variety of trot styles. We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height, and full stance duration. Due to the nature of our approach, these synthesized gaits are continuously variable online during robot operation and robustly capture a richness of motion significantly exceeding the relatively narrow behavior seen during training. In addition, the use of a generative model facilitates the detection and mitigation of disturbances, providing a versatile and robust planning framework. We evaluate our approach on a real quadruped robot and demonstrate that our method achieves a continuous blend of dynamic trot styles whilst being robust and reactive to external perturbations.
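To make the mechanism concrete, here is a minimal sketch of how a drive signal applied to one disentangled latent dimension could be decoded into a continuously varying gait plan. The decoder, latent size, and the dimension index are placeholders for illustration, not the paper's actual interfaces.

```python
# Illustrative sketch only: decoder, latent size, and signal shape are assumptions.
import numpy as np

LATENT_DIM = 16    # assumed size of the learned latent space
CADENCE_DIM = 3    # hypothetical latent dimension tied to cadence

def drive_signal(t, amplitude=0.8, frequency=2.0):
    """Periodic drive signal applied to a single latent dimension."""
    return amplitude * np.sin(2.0 * np.pi * frequency * t)

def synthesize_gait(decoder, duration_s=2.0, dt=0.02):
    """Decode a trajectory of latent states into a whole-body gait plan."""
    plan = []
    z = np.zeros(LATENT_DIM)
    for t in np.arange(0.0, duration_s, dt):
        z[CADENCE_DIM] = drive_signal(t)  # vary one disentangled dimension
        plan.append(decoder(z))           # decoder: latent -> joint targets
    return np.stack(plan)
```

Varying the amplitude or frequency of `drive_signal` at runtime is what would correspond to changing gait parameters online.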
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
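As a reference point for the validation practice the survey asks about, the following is a minimal k-fold cross-validation sketch using scikit-learn; the model and data are stand-ins, not tied to any particular challenge submission.

```python
# Minimal k-fold cross-validation sketch; data and model are placeholders.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 20)        # stand-in feature matrix
y = np.random.randint(0, 2, 100)   # stand-in binary labels

scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[val_idx], y[val_idx]))  # held-out fold accuracy
print(f"mean CV accuracy: {np.mean(scores):.3f}")
```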
This article presents a survey of the literature in the area of Human-Robot Interaction (HRI), specifically on systems containing more than two agents (i.e., having multiple humans and/or multiple robots). We identify three core aspects of ``Multi-agent'' HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another. These are the Team structure, Interaction style among agents, and the system's Computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely Team size, Team composition, Interaction model, Communication modalities, and Robot control. These attributes are used to characterize and distinguish one system from another. We populate the resulting categories with examples from recent literature, along with a brief discussion of their applications, and analyze how these attributes differ from the case of dyadic human-robot systems. We summarize key observations from the current literature, and identify challenges and promising areas for future research in this domain. In order to realize the vision of robots being part of society and interacting seamlessly with humans, there is a need to expand research on multi-human -- multi-robot systems. Not only do these systems require coordination among several agents, but they also involve multi-agent and indirect interactions that are absent from dyadic HRI systems. Adding multiple agents to HRI systems requires advanced interaction schemes, behavior understanding and control methods to allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams requires more attention. This will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans; a team composition reflecting many real-world scenarios.
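The five attributes can be pictured as a simple record type. The sketch below is one illustrative encoding; the field types and enumeration values are choices made here, not definitions taken from the article.

```python
# Illustrative encoding of the survey's five attributes; values are assumptions.
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    COOPERATION = "cooperation"
    COLLABORATION = "collaboration"
    COMPETITION = "competition"

@dataclass
class HRISystem:
    team_size: int                        # total number of agents
    team_composition: tuple[int, int]     # (humans, robots)
    interaction_model: Interaction
    communication_modalities: list[str]   # e.g., ["speech", "gesture"]
    robot_control: str                    # e.g., "centralized" or "distributed"

# Example: a mixed warehouse team of 2 humans and 4 robots.
warehouse = HRISystem(6, (2, 4), Interaction.COLLABORATION,
                      ["speech", "gesture"], "distributed")
```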
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call ``shared intelligence''. This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world, also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences, and motivate the development of a shared hyper-spatial modeling language and transaction protocol as a first, and key, step towards such an ecology.
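For readers unfamiliar with self-evidencing, the standard variational bound from the active inference literature makes the claim precise; this is a textbook formulation, not an equation reproduced from the white paper itself:

```latex
F[q] \;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s \mid o)\big]}_{\geq\, 0}
\;-\; \ln p(o) \;\;\geq\;\; -\ln p(o)
```

Minimizing the variational free energy $F$ over beliefs $q(s)$ about hidden states $s$ therefore maximizes a lower bound on the log model evidence $\ln p(o)$ for observations $o$; this bound-tightening by belief updating is what "self-evidencing" refers to.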
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
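Since the checkpoints are public, a short sketch of loading one through Hugging Face transformers may be useful; the prompt and generation settings are arbitrary, and the full 176B model requires multi-GPU hardware, so a small released variant is used here.

```python
# Sketch of querying a released BLOOM checkpoint via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"   # small variant that fits on one machine
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```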
We investigate how a shepherd should move in order to effectively herd and guide a flock of agents towards a target. Using a detailed agent-based model (ABM) for the members of the flock, we pose and solve an optimization problem for the shepherd, which has to simultaneously work to keep the flock cohesive while coercing it towards a prescribed target. We find that three distinct strategies emerge as potential solutions as a function of just two parameters: the ratio of herd size to shepherd repulsion length and the ratio of herd speed to shepherd speed. We term these: (i) mustering, in which the shepherd circles the herd to ensure compactness, (ii) droving, in which the shepherd chases the herd in a desired direction, and (iii) driving, a hitherto unreported strategy in which the flock surrounds a shepherd that drives it from within. A minimal dynamical model for the size, shape and position of the herd captures the effective behavior of the ABM, and further allows us to characterize the different herding strategies in terms of the behavior of the shepherd, which librates (mustering), oscillates (droving) or moves steadily (driving). Altogether, our study yields a simple and intuitive classification of herding strategies that ought to be of general interest in the context of controlling the collective behavior of active matter.
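A deliberately reduced sketch of the flock/shepherd interplay is given below; the update rules and gains are illustrative toy choices, far simpler than the paper's ABM, but they show the cohesion/repulsion forces the shepherd's optimization acts against.

```python
# Toy flock-and-shepherd dynamics; rules and gains are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
flock = rng.normal(0.0, 1.0, size=(20, 2))   # 2-D positions of 20 agents
shepherd = np.array([-5.0, 0.0])
target = np.array([10.0, 0.0])

for _ in range(500):
    center = flock.mean(axis=0)
    # agents are attracted to the flock center and repelled by the shepherd
    to_center = 0.05 * (center - flock)
    away = flock - shepherd
    dist = np.linalg.norm(away, axis=1, keepdims=True) + 1e-9
    repulsion = 0.5 * away / dist**2
    flock += to_center + repulsion
    # a crude "droving" heuristic: stay behind the flock relative to the target
    desired = center + 2.0 * (center - target) / np.linalg.norm(center - target)
    shepherd += 0.2 * (desired - shepherd)
```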
We present evidence that learned density functional theory (``DFT'') force fields are ready for ground state catalyst discovery. Our key finding is that, although predicted forces may differ significantly from the ground truth, relaxations driven by forces from a learned potential find structures with similar or lower energy than those relaxed using the RPBE functional in over 50% of evaluated systems. This has the surprising implication that learned potentials may already be ready to replace DFT in challenging catalytic systems, such as those found in the Open Catalyst 2020 dataset. Furthermore, we show that a force field trained on a locally harmonic energy surface sharing the same minima as a target DFT energy is also able to find lower or similar energy structures in over 50% of cases. This ``easy potential'' converges in fewer optimization steps than a standard model trained on true energies and forces, which further accelerates calculations. Its success illustrates a key point: learned potentials can locate energy minima even when their force errors are high. The main requirement for structure optimization is simply that the learned potential has the correct minima. Since learned potentials are fast and scale linearly with system size, our results open up the possibility of quickly finding ground states of large systems.
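To illustrate what relaxation with a learned potential means in practice, here is a sketch using the ASE optimizer interface; the built-in EMT calculator stands in for a learned ML potential (any ASE-compatible calculator slots in the same way), and the toy Cu slab is not a system from the dataset.

```python
# Structure relaxation sketch with ASE; EMT stands in for a learned potential.
from ase.build import fcc111, add_adsorbate
from ase.calculators.emt import EMT
from ase.optimize import LBFGS

slab = fcc111("Cu", size=(2, 2, 3), vacuum=10.0)   # toy catalyst surface
add_adsorbate(slab, "O", height=1.5, position="fcc")

slab.calc = EMT()          # swap in any ASE-compatible ML potential here
opt = LBFGS(slab)
opt.run(fmax=0.05, steps=200)   # relax until max force < 0.05 eV/Angstrom
print("relaxed energy:", slab.get_potential_energy())
```

The paper's point maps onto the last two lines: only the location of the minimum found by `opt.run` needs to match DFT, not the forces along the way.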
During the COVID-19 pandemic, the large volume of imaging performed in the emergency setting of COVID-19 diagnosis led to wide variability in clinical CXR acquisition. This variation is seen in the CXR projections used, the addition of image annotations, and the degree of rotation and cropping of clinical images. The image analysis community has attempted to ease the burden on overstretched radiology departments during the pandemic by developing automated COVID-19 diagnostic algorithms that take CXR imaging as input. Large volumes of publicly available CXR datasets have been leveraged to improve deep learning algorithms for COVID-19 diagnosis. However, the variable quality of clinically acquired CXRs in publicly available datasets may have a profound effect on algorithm performance. A COVID-19 diagnosis may be inferred by an algorithm from non-anatomical features on an image, such as image annotations. These imaging shortcuts may be dataset-specific and limit the generalizability of AI systems. Understanding and correcting key potential biases in CXR images is therefore an essential first step prior to CXR image analysis. In this study, we propose a simple and effective stepwise approach to preprocessing a COVID-19 chest X-ray dataset to remove undesired biases. We perform ablation studies to show the impact of each individual step. The results suggest that using our proposed pipeline can improve the accuracy of a baseline COVID-19 detection algorithm by up to 13%.
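Since the paper's exact pipeline steps are not listed in this abstract, the sketch below shows hypothetical preprocessing operations of the kind described, namely removing burnt-in annotations near borders and standardizing intensity and size; the specific crop fractions and output size are illustrative choices, not the study's parameters.

```python
# Hypothetical CXR preprocessing sketch; steps and constants are illustrative.
import cv2
import numpy as np

def preprocess_cxr(img: np.ndarray) -> np.ndarray:
    """Reduce non-anatomical shortcuts in a grayscale chest X-ray."""
    # 1. crop a fixed border, where burnt-in labels commonly sit
    h, w = img.shape
    img = img[int(0.08 * h): int(0.92 * h), int(0.08 * w): int(0.92 * w)]
    # 2. normalize intensities to a common 8-bit range
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # 3. histogram equalization to reduce acquisition-dependent contrast
    img = cv2.equalizeHist(img)
    # 4. resize to a standard network input size
    return cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
```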
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
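BIG-bench tasks in their simplest (JSON) form pair an input prompt with a target string. The sketch below mirrors that format with a toy task and an exact-match scorer; `model` is assumed to be any callable from prompt text to completion text, an assumption made here for illustration.

```python
# Toy BIG-bench-style task and scorer; the task content is made up.
task = {
    "name": "toy_arithmetic",
    "examples": [
        {"input": "Q: What is 7 plus 5? A:", "target": "12"},
        {"input": "Q: What is 9 minus 3? A:", "target": "6"},
    ],
}

def exact_match_score(model, examples):
    """Fraction of examples where the model's completion matches the target."""
    hits = sum(model(ex["input"]).strip() == ex["target"] for ex in examples)
    return hits / len(examples)

# usage with a dummy model that always answers "12":
print(exact_match_score(lambda prompt: "12", task["examples"]))
```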
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out the fine-grained interaction details of the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combinations delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and an assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from competing teams are presented, all recognizing surgical action triplets directly from surgical videos and achieving mean average precision (mAP) scores ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
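As a reference for the reported metric, the sketch below computes mean average precision over triplet classes in the usual one-vs-rest way; the frame-level scores are made up, and the class count is reduced to three for brevity (CholecT50 defines far more triplet classes).

```python
# mAP over triplet classes: average precision per class, then the class mean.
import numpy as np
from sklearn.metrics import average_precision_score

# y_true: binary presence of each triplet class per frame (frames x classes)
y_true = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]])
# y_score: model confidence for each class per frame (made-up values)
y_score = np.array([[0.9, 0.2, 0.1], [0.3, 0.7, 0.2],
                    [0.6, 0.1, 0.8], [0.2, 0.3, 0.7]])

ap_per_class = [average_precision_score(y_true[:, c], y_score[:, c])
                for c in range(y_true.shape[1])]
print(f"mAP: {np.mean(ap_per_class):.3f}")
```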